Patent Abstract:
DISPLAY APPARATUS AND CONTROL METHOD. A new and improved display apparatus capable of preventing viewing by a user in a position unfit for viewing, and a control method, are disclosed. Specifically disclosed is a display apparatus provided with: an image capture unit that captures a moving image within a predetermined range in an image display direction; an image analysis unit that analyzes the moving image captured by the image capture unit and calculates the position of a target to be guided to an appropriate viewing position; and a display control unit that, when the position of the target calculated by the image analysis unit is an inappropriate viewing position, causes a display unit to perform a display for guiding the target to the appropriate viewing position.
Publication number: BR112012004830B1
Application number: R112012004830-9
Filing date: 2010-07-22
Publication date: 2020-10-27
Inventors: Haruo Oba; Yusuke Sakai; Eijiro Mori; Kenichi Okada; Katsunori Tanaka; Shinichi Hayashi; Tomohiko Gotoh; Shingo Tsurumi; Asako Tadenuma
Applicant: Sony Corporation
IPC main classification:
Patent description:

Technical Field
[001] The present invention relates to a display apparatus and a control method.

Background Art
[002] Recently, with the expansion of the flat-screen television market, there has been increasing demand for image display apparatuses, such as large-screen television sets, to be installed in a living room. In such a situation, an image display apparatus including various functions has been proposed.

Summary of the Invention

Technical Problem
[003] Because a user can see an image displayed on the image display apparatus from any position, the user sometimes sees the displayed image from an inappropriate position.
[004] For example, a child sometimes approaches a television screen because children tend to be easily absorbed in broadcast content. When the child continuously watches the image while close to the television screen, the child's focus becomes fixed, which creates a risk of weakened vision or of triggering epilepsy. When the child is too close to the television screen, the television set may also fall over and injure the child. Because children hardly recognize such risks, it is necessary to keep them away from the television screen. Furthermore, because a child may approach the television screen, break the display portion of the television set, and be injured by the broken display portion, it is also necessary for this reason to keep children away from the television screen.
[005] In view of the above, an object of the present invention is to provide a new and improved display apparatus and control method capable of preventing the user from viewing the image from an inappropriate viewing position.

Solution to the Problem
[006] In accordance with an aspect of the present invention, in order to achieve the aforementioned object, a display apparatus is provided including: an image forming unit that captures a moving image in a predetermined range with respect to an image display direction; an image analyzer that analyzes the moving image captured by the image forming unit and calculates a position of a target that should be guided to an appropriate viewing position; and a display controller that causes a display unit to perform a display to guide the target to the appropriate viewing position when the target position calculated by the image analyzer is an inappropriate viewing position.
[007] The display controller can have the display unit display a message that guides the target to the appropriate viewing position.
[008] The display controller can cause the display unit to display a graph illustrating a distance between the target and the display unit.
[009] The display controller can decrease the display unit's luminance.
[0010] The display apparatus may further include a sound controller that causes a sound emitting unit to emit a sound to guide the target to the appropriate viewing position when the target position calculated by the image analyzer is an inappropriate viewing position.
[0011] When determining, by analyzing the moving image captured by the imaging unit, whether the target should be guided to the appropriate viewing position, the image analyzer may combine a determination that the target should be guided to the appropriate viewing position with a determination that the target need not be guided to the appropriate viewing position.
[0012] The image analyzer may use a past determination history when determining, by analyzing the moving image captured by the imaging unit, whether the target should be guided to the appropriate viewing position.
[0013] The image analyzer may use a past calculation history when calculating, by analyzing the moving image captured by the image forming unit, the position of a target that should be guided to the appropriate viewing position.
[0014] In accordance with another aspect of the present invention, in order to achieve the aforementioned object, a control method is provided including: capturing a moving image in a predetermined range with respect to an image display direction; analyzing the captured moving image to calculate the position of a target that should be guided to an appropriate viewing position; and causing a display unit to perform a display to guide the target to the appropriate viewing position when the calculated target position is an inappropriate viewing position.

Advantageous Effects of the Invention
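The three steps of the control method in paragraph [0014] can be sketched as one iteration of a capture–analyze–guide loop. This is a minimal illustration, not the patented implementation; all four callables (`capture_frame`, `analyze`, `display_guidance`, `appropriate`) are hypothetical stand-ins for the apparatus's units.

```python
def control_method_step(capture_frame, analyze, display_guidance, appropriate):
    """One iteration of the claimed control method: capture a
    moving-image frame, analyze it to obtain the target's position,
    and perform a guidance display when that position is an
    inappropriate viewing position."""
    frame = capture_frame()                      # image capture unit
    position = analyze(frame)                    # image analysis unit
    if position is not None and not appropriate(position):
        display_guidance(position)               # display control unit
        return "guiding"
    return "normal"
```

For instance, with the analyzed "position" reduced to a viewing distance in metres, a predicate such as `lambda d: d >= 1.0` treats anything under 1 m as inappropriate.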
[0015] As described above, in accordance with the present invention, the new and improved display apparatus and control method capable of preventing the user from viewing the image from an inappropriate viewing position can be provided.

Brief Description of the Drawings
[0016] Fig. 1 is a view illustrating an appearance of an image display apparatus 100 according to an embodiment of the present invention.
[0017] Fig. 2 is a view illustrating a configuration of the image display apparatus 100 according to an embodiment of the present invention.
[0018] Fig. 3 is a view illustrating a configuration of a controller 110.
[0019] Fig. 4 (A) is a view illustrating the case where a user 1 and a user 2 are present in an image forming range of an imaging unit 104, and Fig. 4 (B) is a view illustrating a face detection position [a1, b1] and a face size [w1, h1] of user 1, and a face detection position [a2, b2] and a face size [w2, h2] of user 2, which are included in an image captured by the imaging unit 104.
[0020] Fig. 5 (A) is a view illustrating the case where a user is present at a reference distance d0 and at a distance d1 in the imaging range of the imaging unit 104, Fig. 5 (B) is a view illustrating the user's face size [w1, h1] at the distance d1 in the image captured by the imaging unit 104, and Fig. 5 (C) is a view illustrating a reference face size [w0, h0] at the reference distance d0 in the image captured by the imaging unit 104.
[0021] Fig. 6 is a flow diagram illustrating an example of child approach prevention processing performed by the image display apparatus 100 according to an embodiment of the present invention.
[0022] Figs. 7 (A) to 7 (D) are views illustrating a method for guiding a child to an optimal viewing position.
[0023] Figs. 8 (A) to 8 (C) are views illustrating a method for correcting the reference face size [w0, h0] at the reference distance d0 by calculating a user's distance.
[0024] Fig. 9 is a view illustrating a time-series fluctuation of characteristic information on a user.
[0025] Figs. 10 (A) and 10 (B) are views illustrating the case where a user's face leaves the angle of view of the imaging unit 104 because the user comes too close to the imaging unit 104.
[0026] Fig. 11 is a view illustrating a result of a determination of whether the user is a child.
[0027] Fig. 12 is a view illustrating a method for determining whether the user is a child.

Description of Embodiments
[0028] In the following, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings. Note that, in this specification and drawings, elements that have substantially the same function and structure are denoted with the same reference signs, and repeated explanation is omitted.
[0029] The explanation will be made in the following order.

<1. One Embodiment of the Invention>
[1-1. Structure of the image display apparatus]
[1-2. Controller configuration]
[1-3. Child approach prevention processing]

<1. One Embodiment of the Invention>
[1-1. Structure of the image display apparatus]
[0030] A configuration of an image display apparatus according to an embodiment of the present invention will be described below. Fig. 1 is a view illustrating an appearance of an image display apparatus 100 of the embodiment. Fig. 1 is a front view of the image display apparatus 100 when viewed from a front side. The appearance of the image display apparatus 100 of the embodiment will be described below with reference to Fig. 1.
[0031] As shown in Fig. 1, the image display apparatus 100 of the embodiment of the present invention includes imaging units 104, which capture a moving image, in an upper central portion and in right and left central portions of a display panel 102 that displays a still image or a moving image. The imaging units 104 capture the moving image with respect to the direction in which the image display apparatus 100 displays the still image or the moving image on the display panel 102. The image display apparatus 100 of the embodiment analyzes the image captured by the imaging unit 104, and detects a user's face in the image. The image display apparatus 100 analyzes the image of the detected user's face to calculate pieces of characteristic information such as age and sex. Based on the calculated pieces of characteristic information, the image display apparatus 100 determines whether the user is a child or an adult. The image display apparatus 100 also analyzes the image of the detected user's face to detect a face detection position and a face size. The image display apparatus 100 calculates the user's position based on the result of determining whether the user is a child or an adult and on the detection results of the face detection position and the user's face size. One of the characteristics of the image display apparatus 100 of the embodiment is that, when a user who is a child is located in an inappropriate position relative to the image display apparatus 100, for example within a range of 1 m from the display panel 102, a display or a sound is produced in order to guide the child to the optimal viewing position, for example at least 1 m away from the display panel 102.
[0032] The image display apparatus 100 of the embodiment of the present invention includes a sensor 106 in a lower central portion of the display panel 102. Sensor 106 detects the presence or absence of a human in front of the image display apparatus 100.
[0033] In Fig. 1, the image display apparatus 100 includes the imaging units 104, which capture the moving image, at three points around the display panel 102. Needless to say, in the present invention, the locations where the imaging units 104 capture the moving image are not limited to the three points mentioned above. For example, another device may be provided independently of the image display apparatus 100 and connected to the image display apparatus 100 to capture the moving image. The number of imaging units 104 is, of course, not limited to three; one, two, or four or more imaging units 104 may be provided to capture the moving image. The number of sensors 106 is not limited to one; two or more sensors may be provided.
[0034] Although not shown in Fig. 1, the image display apparatus 100 may further include a signal receiving unit that can receive a control signal from a remote controller (not shown) in an infrared or wireless manner.
[0035] The appearance of the image display apparatus 100 has been described above with reference to Fig. 1. A configuration of the image display apparatus 100 of the present invention will be described below.
[0036] Fig. 2 is a view illustrating the configuration of the image display apparatus 100 of the embodiment of the present invention. The configuration of the image display apparatus 100 of the embodiment will be described below with reference to Fig. 2.
[0037] As illustrated in Fig. 2, the image display apparatus 100 of the embodiment includes the display panel 102, the imaging units 104, the sensor 106, a speaker 108, and a controller 110.
[0038] The controller 110 is configured to include an image input unit 112, an image processor 114, a viewing state analyzer 116, a viewing state recorder 118, a system optimization processor 120, and a system controller 122.
[0039] The display panel 102 is an example of a display unit of the present invention, and displays the still image or the moving image based on a panel drive signal. In the embodiment, the display panel 102 displays the still image or the moving image on a liquid crystal display panel. Needless to say, the display panel 102 is not limited to a liquid crystal display panel. The display panel 102 may display the still image or the moving image using a self-luminous display device such as an organic EL (Electro-Luminescence) display.
[0040] As described above, the imaging units 104 are included in the upper central portion and in the central right and left portions of the display panel 102, which displays the still image or the moving image. The imaging units 104 capture the moving image with respect to the direction in which the image display apparatus 100 displays the moving image on the display panel 102 when the panel drive signal is supplied to the display panel 102 and the moving image is displayed on the display panel 102. The imaging unit 104 may capture the moving image using a CCD (Charge-Coupled Device) image sensor, or capture the moving image using a CMOS (Complementary Metal-Oxide Semiconductor) image sensor. The moving image captured by the imaging unit 104 is transmitted to the controller 110.
[0041] As described above, the sensor 106 is included in the lower central portion of the display panel 102, which displays the still image or the moving image. The sensor 106 detects the presence or absence of a human in front of the image display apparatus 100. The sensor 106 can also detect a distance between the image display apparatus 100 and the human when the human is present in front of the image display apparatus 100. The detection result and the distance information from the sensor 106 are transmitted to the controller 110. The speaker 108 is an example of the sound emitting unit of the invention, and emits a sound based on a sound emission signal.
[0042] Controller 110 controls an operation of the image display apparatus 100. Each unit of controller 110 will be described below.
[0043] The image input unit 112 receives the moving image captured by the imaging unit 104. The moving image received by the image input unit 112 is transmitted to the image processor 114, and used in the image processing performed by the image processor 114.
[0044] The image processor 114 is an example of the image analyzer of the present invention. The image processor 114 performs various pieces of image processing on the moving image that is captured by the imaging unit 104 and transmitted from the image input unit 112. The pieces of image processing performed by the image processor 114 include processing to detect a dynamic body included in the moving image captured by the imaging unit 104, processing to detect the number of humans included in the moving image, and processing to detect a face and a facial expression included in the moving image. The results of the various pieces of image processing performed by the image processor 114 are transmitted to the viewing state analyzer 116, and used to analyze the presence or absence of a person viewing the image display apparatus 100 as well as the viewing state and the viewing position of the person viewing the image display apparatus 100.
[0045] In the image processor 114, for example, a technology disclosed in Japanese Patent Application Laid-Open (JP-A) No. 2007-65766 or in JP-A No. 2005-44330 can be used as the processing to detect a face included in the image. The face detection processing will be briefly described below.
[0046] In order to detect the user's face from the image, a face position, a face size, and a face direction are detected in the provided image. When the position and size of the face are detected, a portion of the face image can be cut out of the image. Facial feature portions (facial feature positions) such as an eyebrow, an eye, a nose, and a mouth are detected from the cropped face image and the information on the face direction. In order to detect the facial feature positions, for example, a method called AAM (Active Appearance Models) can be adopted.
[0047] When the facial feature positions are detected, a local feature amount is calculated for each detected facial feature position. The calculated local feature amounts are stored along with the face image, which allows the face to be identified in the image captured by the imaging unit 104. For example, a technology disclosed in JP-A No. 2007-65766 or JP-A No. 2005-44330 can be used as the face identification method, so the detailed description is omitted here. Whether the face in the provided image is that of a man or a woman, and how old the person is, can also be determined from the face image and the facial feature positions. When face information is recorded in advance, the person in the provided image can be searched for among the recorded faces, and an individual can be identified.
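The identification step in paragraph [0047] — searching the recorded faces for the person in the provided image — can be sketched as nearest-neighbour matching over stored feature vectors. The Euclidean metric and the dictionary layout are assumptions for illustration only; the patent defers the actual method to JP-A No. 2007-65766 and JP-A No. 2005-44330.

```python
import math

def identify_face(feature_vec, registered):
    """Return the registered individual whose stored local-feature
    vector is closest (Euclidean distance) to the detected one, or
    None when no faces have been recorded in advance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    if not registered:
        return None
    # registered: {name: stored local-feature vector}
    return min(registered, key=lambda name: dist(feature_vec, registered[name]))
```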
[0048] The viewing state analyzer 116 is an example of an image analyzer of the present invention. The viewing state analyzer 116 receives the results of the various pieces of image processing performed by the image processor 114 as well as the detection result and the distance information detected by the sensor 106, and, using them, analyzes whether the person viewing the image displayed by the image display apparatus 100 is a child or an adult, together with the viewing state and the viewing position of that person. By analyzing whether the person viewing the image displayed by the image display apparatus 100 is a child or an adult as well as that person's viewing state and viewing position, the viewing state analyzer 116 allows the image display apparatus 100 to control the luminance of the display panel 102, the display contents of the display panel 102, and the sound contents, based on whether the person viewing the image display apparatus 100 is a child or an adult and on the person's viewing position. The analysis result of the analysis processing performed by the viewing state analyzer 116 is transmitted to the viewing state recorder 118 and to the system optimization processor 120.
[0049] The viewing state analyzer 116 can detect a dynamic body from the detection result and the distance information detected by the sensor 106. Alternatively, the dynamic body may be excluded from the detection target when the distance between the sensor 106 and the dynamic body is greater than a predetermined distance.
[0050] The viewing state recorder 118 records the analysis result obtained through the analysis processing of the viewing state analyzer 116. The analysis result of the viewing state analyzer 116 recorded in the viewing state recorder 118 is used in the system optimization processing performed by the system optimization processor 120. The analysis result of the viewing state analyzer 116 recorded in the viewing state recorder 118 may also be transmitted to an external information collection server 200.
[0051] The system optimization processor 120 is an example of the image analyzer of the present invention. Using the analysis result obtained through the analysis processing performed by the viewing state analyzer 116, the system optimization processor 120 calculates system control information in order to perform system optimization processing on each unit of the image display apparatus 100. Examples of the system optimization processing performed on each unit of the image display apparatus 100 include luminance control of the display panel 102, control of the display contents of the display panel 102, control of the sound contents output from the speaker 108, and volume control.
[0052] Image display apparatus 100 can perform child approach prevention processing based on system control information calculated by system optimization processor 120. System control information calculated by system optimization processor 120 is transmitted to system controller 122.
[0053] The system controller 122 is an example of the display controller and the sound controller of the present invention, and performs the system optimization processing on each unit of the image display apparatus 100 based on the system control information calculated by the system optimization processor 120. Specifically, based on the system control information calculated by the system optimization processor 120, the system controller 122 performs the luminance control of the display panel 102, the control of the display contents of the display panel 102, the control of the sound contents emitted from the speaker 108, the volume control of the sound, and the like.
[0054] The configuration of the image display apparatus 100 according to the embodiment of the present invention has been described above with reference to Fig. 2. In the following, a structure of the controller 110 included in the image display apparatus 100 of the embodiment will be described in detail.

[1-2. Controller configuration]
[0055] Fig. 3 is a view illustrating the configuration of the controller 110 included in the image display apparatus 100 according to the embodiment of the present invention. Fig. 3 illustrates, in particular, the configuration of the viewing state analyzer 116 included in the controller 110. The configuration of the viewing state analyzer 116 will be described below with reference to Fig. 3.
[0056] As illustrated in Fig. 3, the viewing state analyzer 116 is configured to include a user direction/distance calculator 132 and a user characteristic calculator 134.
[0057] The user direction/distance calculator 132 receives the results of the various pieces of image processing performed by the image processor 114 as well as pieces of optical information, such as the angle of view and the resolution of the imaging unit 104, and calculates a relative position (direction [ψ1, θ1] and distance d1) of the user with respect to an optical axis of the imaging unit 104 using the results of the various pieces of image processing performed by the image processor 114 as well as the pieces of optical information on the imaging unit 104. Fig. 4 (A) is a view illustrating the case where a user 1 and a user 2 are present in an image forming range of an imaging unit 104, and Fig. 4 (B) is a view illustrating a face detection position [a1, b1] and a face size [w1, h1] of user 1 and a face detection position [a2, b2] and a face size [w2, h2] of user 2, which are included in an image captured by the imaging unit 104. Fig. 5 (A) is a view illustrating the case where a user is present at the reference distance d0 and at a distance d1 in the image forming range of the imaging unit 104, Fig. 5 (B) is a view illustrating the user's face size [w1, h1] at the distance d1 in the image captured by the imaging unit 104, and Fig. 5 (C) is a view illustrating a reference face size [w0, h0] of the user at the reference distance d0 in the image captured by the imaging unit 104.
[0058] As for the direction [ψ1, θ1], the horizontal direction ψ1 = ψ0 * a1 and the vertical direction θ1 = θ0 * b1 are calculated from the face detection position [a1, b1], normalized by the captured image size [xmax, ymax], and from the angle of view [ψ0, θ0] of the imaging unit 104. As for the distance d1, the distance d1 = d0 * (w0 / w1) is calculated from the reference face size [w0, h0] at the reference distance d0.
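Paragraph [0058] amounts to two small formulas, which can be sketched directly in code. The angle-of-view and reference values below are invented constants for illustration; in the apparatus they would come from the pre-registered reference face size and the optics of the imaging unit 104.

```python
from dataclasses import dataclass

@dataclass
class CameraConfig:
    psi0: float    # horizontal angle of view [psi0], degrees (assumed value)
    theta0: float  # vertical angle of view [theta0], degrees (assumed value)
    w0: float      # reference face size w0, registered in advance
    d0: float      # reference distance d0, metres

def user_direction_and_distance(cfg, a1, b1, w1):
    """Relative user position per [0058]:
    psi1 = psi0 * a1 and theta1 = theta0 * b1 from the normalized face
    detection position [a1, b1]; d1 = d0 * (w0 / w1) from the detected
    face size w1 against the reference size w0 at distance d0."""
    psi1 = cfg.psi0 * a1
    theta1 = cfg.theta0 * b1
    d1 = cfg.d0 * (cfg.w0 / w1)
    return psi1, theta1, d1
```

A face detected twice as large as the reference size thus yields half the reference distance, which is how the apparatus can flag a user inside the 1 m range.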
[0059] The user characteristic calculator 134 receives the results of the various pieces of image processing performed by the image processor 114 as well as pieces of characteristic information, such as a user's age, obtained by the image processor 114. The user characteristic calculator 134 determines whether the user is a child or an adult using the results of the various pieces of image processing performed by the image processor 114 and the pieces of characteristic information obtained by the image processor 114.
[0060] At this point, the image processor 114 transmits the captured image and the face determination information (such as the face detection position [a1, b1], the face size [w1, h1], and other pieces of characteristic information such as age and sex) on each user using the image display apparatus 100 in the captured image to the user direction/distance calculator 132 and the user characteristic calculator 134 of the viewing state analyzer 116. In the viewing state analyzer 116, the user direction/distance calculator 132 analyzes the viewing position of the human viewing the image display apparatus 100 using the pieces of information transmitted from the image processor 114. The user characteristic calculator 134 analyzes whether the human viewing the image display apparatus 100 is a child or an adult using the pieces of information transmitted from the image processor 114.
[0061] When a child is located in an inappropriate viewing position relative to the image display apparatus 100, the system optimization processor 120 calculates system control information for processing (child guidance implementation processing) to guide the child to the appropriate viewing position of the image display apparatus 100, using the results of the pieces of analysis processing performed by the user direction/distance calculator 132 and the user characteristic calculator 134. Examples of the processing to guide the child to the appropriate viewing position of the image display apparatus 100 include processing to display a message guiding the child to the optimal viewing position on the display panel 102 while lowering the luminance of the display panel 102, and processing to display a graph of the approach distance and an alert message on the display panel 102. Examples of the processing to guide the child to the appropriate viewing position of the image display apparatus 100 also include processing to display the guide message to the optimal viewing position on the display panel 102 while darkening the display panel 102, and processing to output an alert sound from the speaker 108. The system control information for the child guidance implementation processing, calculated by the system optimization processor 120, is transmitted to the system controller 122 and used in the child approach prevention processing.
[0062] The structure of the controller 110 included in the image display apparatus 100 according to the embodiment of the present invention has been described above with reference to Fig. 3. Next, the child approach prevention processing performed by the image display apparatus 100 of the embodiment will be described below.

[1-3. Child approach prevention processing]
[0063] Fig. 6 is a flow diagram illustrating an example of the child approach prevention processing performed by the image display apparatus 100 according to an embodiment of the present invention. The child approach prevention processing performed by the image display apparatus 100 of the embodiment will be described below with reference to Fig. 6.
[0064] Referring to Fig. 6, when the imaging unit 104 of the image display apparatus 100 starts capturing the image, the image processor 114 of the image display apparatus 100 performs the face detection processing on the image captured by the imaging unit 104, and recognizes the face of the person viewing the image displayed by the image display apparatus 100 (Step S602).
[0065] Then, using the pieces of information transmitted from the image processor 114, the viewing state analyzer 116 of the image display apparatus 100 performs the processing to analyze the viewing position of the human viewing the image display apparatus 100 and the processing to analyze whether the human is a child or an adult. Using the result of the analysis processing performed by the viewing state analyzer 116, the system optimization processor 120 of the image display apparatus 100 determines whether a child is located in the inappropriate viewing position of the image display apparatus 100, that is, whether a child is approaching the image display apparatus 100. Specifically, the system optimization processor 120 determines whether a child's face is detected, and determines whether the detected child's face size is equal to or greater than a predetermined value (Step S604). It is assumed that a child's face size at the reference position is registered in advance in the image display apparatus 100. The predetermined value is a child's face size at the inappropriate viewing position of the image display apparatus 100.
[0066] As a result of the determination in Step S604, when the child's face is detected and the child's face size is equal to or greater than the predetermined value (YES in Step S604), the system optimization processor 120 determines whether the number of times a child approach has been detected in a past predetermined time is equal to or greater than a predetermined value (Step S606). When the number of times a child approach has been detected in the past predetermined time is less than the predetermined value (NO in Step S606), the flow returns to the processing in Step S602. The predetermined time can be adjusted appropriately, and the predetermined value can be changed appropriately according to the adjusted predetermined time.
[0067] As a result of the determination in Step S606, when the number of times a child approach has been detected in the past predetermined time is equal to or greater than the predetermined value (YES in Step S606), the system optimization processor 120 determines that a child is approaching the image display apparatus 100 (Step S608).
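Steps S604 to S608 describe a debounced decision: a single close-range frame is not enough, and the approach is confirmed only when the count of close detections within the past predetermined time reaches the predetermined value. A sketch of that logic follows; the face-size threshold, window length, and hit count are invented for illustration.

```python
from collections import deque

class ApproachDetector:
    """Sketch of the Step S604-S608 logic: a child is judged to be
    approaching only when enough close-range child-face detections
    accumulate within a past time window."""
    def __init__(self, size_threshold=0.25, window_s=5.0, min_count=3):
        self.size_threshold = size_threshold  # predetermined face-size value
        self.window_s = window_s              # the "past predetermined time"
        self.min_count = min_count            # predetermined detection count
        self.hits = deque()                   # timestamps of close detections

    def update(self, now, child_face_detected, face_size):
        # Step S604: child face detected at or above the predetermined size?
        if child_face_detected and face_size >= self.size_threshold:
            self.hits.append(now)
        # Discard detections that have fallen out of the time window.
        while self.hits and now - self.hits[0] > self.window_s:
            self.hits.popleft()
        # Steps S606/S608: approach is confirmed only with enough hits.
        return len(self.hits) >= self.min_count
```

This also mirrors paragraphs [0011] and [0012]: the past determination history, not a single frame, drives the guidance decision.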
[0068] Then the system optimization processor 120 calculates the system control information for the child guidance implementation processing described above, and transmits the calculated system control information to the system controller 122. Based on the system control information received from the system optimization processor 120, the system controller 122 performs the luminance control of the display panel 102, the control of the display contents of the display panel 102, the control of the sound contents output from the speaker 108, the sound volume control, and the like (Step S610). As illustrated in Fig. 7 (A), a message guiding the child to the optimal viewing position is displayed on the display panel 102 while the luminance of the display panel 102 is decreased. As illustrated in Fig. 7 (B), the message guiding the child to the optimal viewing position is displayed on the display panel 102 while the display panel 102 is darkened. As illustrated in Fig. 7 (C), a graph of the approach distance between the image display apparatus 100 and the user and an alert message are displayed on the display panel 102 while the luminance of the display panel 102 is decreased.
[0069] As illustrated in Fig. 7 (D), an alert sound is output from the speaker 108 while the display panel 102 is darkened. The remote controller (not shown) may be vibrated in conjunction with the above pieces of processing. The flow then returns to the processing in Step S602.
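The Step S610 actions shown in Fig. 7 (A) to 7 (D) combine a lowered panel luminance, a guide message, and an alert sound. A sketch that assembles such a display state follows; the luminance factor, the message text, and the 1 m optimal-distance threshold are illustrative assumptions, not values from the patent.

```python
def child_guidance_state(distance_m, optimal_m=1.0):
    """Build the display/sound state for Step S610: when the child is
    closer than the optimal viewing distance, darken the panel, show a
    guide message, and request the alert sound; otherwise keep the
    normal display state."""
    if distance_m >= optimal_m:
        return {"luminance": 1.0, "message": None, "alert_sound": False}
    return {
        "luminance": 0.3,  # lowered luminance (illustrative factor)
        "message": f"Please watch from at least {optimal_m:.0f} m away.",
        "alert_sound": True,
    }
```

Returning a plain state dictionary keeps the decision separate from the panel and speaker drivers, which matches the split between the system optimization processor 120 (calculating control information) and the system controller 122 (applying it).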
[0070] As a result of the determination in Step S604, when a child's face is detected and the child's face size is less than the predetermined value (NO in Step S604), the system optimization processor 120 determines whether a child having a face size smaller than the predetermined value, or a face other than a child's face, has been detected (Step S612). When neither a child having a face size smaller than the predetermined value nor a face other than a child's face has been detected (NO in Step S612), the flow returns to the processing in Step S602.
[0071] As a result of the determination in Step S612, when a child's face having a size smaller than the predetermined value, or a face other than a child's face, is detected (YES in Step S612), the system optimization processor 120 determines whether the number of child approach times detected in the past predetermined time is equal to or less than the predetermined value (Step S614). When the number of child approach times detected in the past predetermined time is greater than the predetermined value (NO in Step S614), the flow returns to the processing in Step S602.
[0072] As a result of the determination in Step S614, when the number of child approach times detected in the past predetermined time is equal to or less than the predetermined value (YES in Step S614), the system optimization processor 120 determines that a child is not approaching the image display device 100 (Step S616).
[0073] Subsequently, when the system controller 122 has been performing the luminance control of the display panel 102, the control of the display content of the display panel 102, the control of the sound content output from the speaker 108, the sound volume control, and the like based on the system control information for the child guide implementation processing, the system optimization processor 120 calculates system control information for returning to the normal display processing, and transmits the calculated system control information to the system controller 122. Based on the system control information received from the system optimization processor 120, the system controller 122 returns the luminance and display content of the display panel 102 and the sound content output from the speaker 108 to the normal state (Step S618). Then, the flow returns to the processing in Step S602.
[0074] According to the child approach prevention processing in Fig. 6, when the number of child approach times detected in the past predetermined time is equal to or greater than the predetermined value, the system optimization processor 120 determines that a child is approaching the image display device 100. The system controller 122 then performs the processing of guiding the child to the appropriate viewing position of the image display device 100, for example, processing in which the guide message is displayed on the display panel 102 while the luminance of the display panel 102 is reduced, as shown in Fig. 7 (A). Thus, the child can be prevented from approaching the image display device 100 and from viewing and listening at a position inappropriate for the image display device 100. Consequently, the weakening of vision and the generation of epilepsy, which are attributed to the fact that the child's focus becomes fixed when the child continuously watches the image while close to the image display device 100, can be prevented. The fall of the image display device 100, which is caused by the child's excessive approach to the image display device 100, can also be prevented. The risk that the child approaches the image display device 100 and breaks the display unit of the image display device 100, or that the child is injured by the broken display unit of the image display device 100, can also be eliminated.
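The counting-and-thresholding flow of Steps S602 to S618 described above can be sketched as follows. This is an illustrative reconstruction, not code from the patent; the class, method, and parameter names (ChildApproachMonitor, window_sec, threshold) are assumptions:

```python
import time
from collections import deque

class ChildApproachMonitor:
    """Sketch of the Fig. 6 flow: count child-approach detections in a
    sliding time window and switch between guidance display and normal
    display. All names and default values are illustrative."""

    def __init__(self, window_sec=5.0, threshold=3):
        self.window_sec = window_sec   # the "past predetermined time"
        self.threshold = threshold     # the "predetermined value"
        self.detections = deque()      # timestamps of child-approach detections
        self.guiding = False           # True while the guide message is shown

    def on_frame(self, face_is_child, face_size, min_face_size, now=None):
        now = time.monotonic() if now is None else now
        # A child's face at or above the reference size means the child is close
        if face_is_child and face_size >= min_face_size:
            self.detections.append(now)
        # Drop detections older than the window (Steps S606 / S614)
        while self.detections and now - self.detections[0] > self.window_sec:
            self.detections.popleft()
        if len(self.detections) >= self.threshold and not self.guiding:
            self.guiding = True        # Steps S608/S610: dim panel, show message
        elif len(self.detections) < self.threshold and self.guiding:
            self.guiding = False       # Steps S616/S618: restore normal display
        return self.guiding
```

Keeping timestamps rather than a bare counter makes the "past predetermined time" condition explicit: old detections age out of the window automatically.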
[0075] In the embodiment, as illustrated in Figs. 8 (A) to 8 (C), in calculating the user's viewing position, a variation in the reference face size [w0, h0] at the reference distance d0 can be corrected using the following correction table. For example, a data table of the average face size at each user age is stored in advance from the characteristic information on the age of the user; the reference face size [w0, h0] is adjusted to a face size [w0C, h0C] smaller than the reference face size as illustrated in Fig. 8 (C) when the user is a child, and to a face size [w0A, h0A] larger than the reference face size as illustrated in Fig. 8 (B) when the user is an adult.
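A minimal sketch of this distance calculation, assuming the inverse proportionality between apparent face size and distance used throughout the embodiment. The function name and the values in ref_sizes are hypothetical, not taken from the patent:

```python
def estimate_viewing_distance(face_w, user_age_group, d0=1.0, ref_sizes=None):
    """Estimate the viewing distance from the detected face width by
    comparing it with an age-corrected reference face width at the
    reference distance d0. ref_sizes plays the role of the correction
    table described in the text; its values are illustrative."""
    if ref_sizes is None:
        # hypothetical average face widths (normalized units) at distance d0:
        # [w0C] for a child, [w0A] for an adult
        ref_sizes = {"child": 0.12, "adult": 0.18}
    w0 = ref_sizes[user_age_group]
    # Apparent size is inversely proportional to distance: w / w0 = d0 / d
    return d0 * w0 / face_w
```

For the same detected face width, the adult reference yields a larger estimated distance, which is exactly the correction the table is meant to provide.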
[0076] In the embodiment, in calculating the user's viewing position, when the users of the image display device 100, for example the family living at the installation location of the image display device 100, are registered in advance in the image display device 100, the face size of each user can be recorded in a data table. Thus, the reference face size can be changed for each user. A method for recording the face size for each user can be implemented in such a way that the user's image is captured together with distance information from a separate distance sensor (not shown), that the user's image is captured after the user is guided to a given distance, or that the user's image is captured at the same distance as a scale that serves as a reference.
[0077] As illustrated in Fig. 9, even for the same user, the user's characteristic information fluctuates over a time series. Thus, in the embodiment, when the system optimization processor 120 determines whether the user is a child or an adult, the determination is made based on a time-series trend, that is, using a past determination history. For example, as illustrated in Fig. 9, it is assumed that the characteristic information on the user is that of a child at a time point t1, that of an adult at a time point t2, and that of a child at a time point t3. In this case, the determination that the user is a child can be made because, during the predetermined period between time point t1 and time point t3, the determination that the user is a child was made a given number of times, for example, at least twice.
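The t1/t2/t3 example above amounts to a vote over the most recent per-frame determinations. A minimal sketch, assuming a window of three determinations and a threshold of two "child" votes (the function and parameter names are illustrative):

```python
from collections import deque

def is_child_over_time(history, new_label, window=3, min_child_votes=2):
    """Append the newest per-frame label ("child" or "adult") to the
    history, then classify the user as a child only if at least
    min_child_votes of the last `window` labels are "child"."""
    history.append(new_label)
    recent = list(history)[-window:]
    return recent.count("child") >= min_child_votes
```

With the Fig. 9 sequence child, adult, child, the final call returns True: two of the three most recent determinations say "child", so a single outlier frame does not flip the result.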
[0078] In the embodiment, the determination that a child is approaching is made when the number of child approach detections in the past predetermined time is equal to or greater than the predetermined value. However, as illustrated in Figs. 10 (A) and 10 (B), the user may occasionally approach the imaging unit 104 so closely, as shown at time point t3, that the user's face exits the viewing angle of the imaging unit 104 and the face can hardly be detected. Thus, in the embodiment, using the pieces of information at the time points t1 and t2 immediately before the time point t3 at which the user's face is no longer detected, that is, the past position calculation histories, the determination that the user approaches the image display apparatus 100 beyond the imaging range of the imaging unit 104 can be made by calculating or assuming the position at time point t3. When part of the user's body covers the imaging unit 104, the accuracy of determining whether the user approaches the image display apparatus 100 beyond the imaging range of the imaging unit 104 can be improved by using, for example, a background difference or dynamic body detection (the size of a motion area).
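The use of the positions calculated at t1 and t2 to assume the position at t3 can be illustrated with a simple constant-velocity extrapolation. This is a sketch of one plausible realization; the function name and the (horizontal offset, distance) tuple layout are assumptions:

```python
def extrapolate_position(p1, t1, p2, t2, t3):
    """Linearly extrapolate the user's (horizontal offset, distance)
    viewing position to time t3 from the last two calculated positions
    p1 at t1 and p2 at t2, for use when the face has left the viewing
    angle of the imaging unit. Assumes constant velocity."""
    dt = t2 - t1
    vx = (p2[0] - p1[0]) / dt   # lateral velocity
    vd = (p2[1] - p1[1]) / dt   # rate of change of distance (negative = approaching)
    return (p2[0] + vx * (t3 - t2), p2[1] + vd * (t3 - t2))
```

An extrapolated distance at or near zero at t3 supports the determination that the user has moved past the imaging range toward the screen.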
[0079] In the embodiment, when the determination of whether the user is a child or an adult is made using the characteristic information on the age of the user and the like, the rate of false child determinations is reduced by combining the determination of whether the user is a child with the determination of whether the user is an adult. A method for reducing the rate of false child determinations is specifically described below.
[0080] As illustrated in Fig. 11, the following four cases are possible as the results of determining whether the user is a child.
[0081] Case A: True Positive (the determination that the user is a child has been correctly made)
[0082] Case B: False Negative (the determination that the user is a child was not made when such a determination should have been made)
[0083] Case C: False Positive (the determination that the user is a child was made when such a determination should not have been made)
[0084] Case D: True Negative (the determination that the user is not a child was correctly made)
[0085] In the embodiment, so that the determination that a child approaches the image display apparatus 100 is correctly made, Case C, in which the determination that the user is a child is made when such a determination should not be made, needs to be eliminated in determining whether the user is a child. For example, it is assumed that the accuracy of determining whether the user is a child is 85% at an FPR of 10%. It is assumed here that FPR (False Positive Rate) = Case C / (Case C + Case D), and Accuracy = Case A / (Case A + Case C).
[0086] In the case where the determination of whether a user is a child is made, with the above determination accuracy, on 100 people who are really children and 150 people who are really adults, and 100 people are determined to be children, 85 people fall in Case A, 15 people fall in Case B, 15 people fall in Case C, and 135 people fall in Case D. Consequently, 85 people are correctly determined to be children out of the 100 people who are really children, and 15 people are falsely determined to be children out of the 150 people who are really adults.
[0087] In the embodiment, as illustrated in Fig. 12, the determination of whether the user is an adult is made after the determination of whether the user is a child. For example, as described above, 85 real children and 15 real adults were determined to be children in the determination of whether the user is a child. When these people are then subjected to the determination of whether the user is an adult, because a person who is really an adult can be expected to be more likely to be determined to be an adult than a person who is really a child, it can be expected, for example, that 10 people are falsely determined to be adults out of the 85 real children and that 10 people are correctly determined to be adults out of the 15 real adults. Consequently, 75 people are correctly determined to be non-adults out of the 85 real children, and 5 people are falsely determined to be non-adults out of the 15 real adults. Thus, in the embodiment, the determination of whether the user is a child is made by combining the determination of whether the user is a child with the determination of whether the user is an adult. As illustrated in Fig. 12, the accuracy of determining whether a user is a child can thus be increased from 85% with the child determination alone to 93.75% (= 75 / (75 + 5)) with the combination of the child determination and the adult determination.
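The worked example above can be reproduced with a few lines of arithmetic. The function name and parameters are illustrative; the two adult-classifier counts are the assumed outcomes stated in the text:

```python
def combined_child_precision(n_children=100, n_adults=150,
                             child_tpr=0.85, child_fpr=0.10,
                             adult_miss_on_children=10, adult_hit_on_fp=10):
    """Accuracy (Case A / (Case A + Case C)) of the child determination
    alone versus combined with a subsequent adult determination, using
    the counts of the example in the text."""
    case_a = round(n_children * child_tpr)    # true positives: 85
    case_c = round(n_adults * child_fpr)      # false positives: 15
    single = case_a / (case_a + case_c)       # 85 / 100 = 0.85
    # The adult determination rejects some of the flagged people as adults
    a_left = case_a - adult_miss_on_children  # 75 real children still flagged
    c_left = case_c - adult_hit_on_fp         # 5 real adults still flagged
    combined = a_left / (a_left + c_left)     # 75 / 80 = 0.9375
    return single, combined
```

The improvement from 0.85 to 0.9375 relies on the adult classifier rejecting proportionally more of the false positives (10 of 15) than of the true positives (10 of 85).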
[0088] The above series of processing can be performed by either hardware or software. In the case where the series of processing is performed by software, a program constituting the software is installed from a program recording medium into a computer incorporated in dedicated hardware, or into a general-purpose personal computer that can perform various functions by installing various programs.
[0089] The preferred embodiments of the present invention have been described above with reference to the accompanying drawings, but the present invention is of course not limited to the above examples. A person skilled in the art may find various alterations and modifications within the scope of the claims that follow, and it should be understood that they naturally come under the technical scope of the present invention.
Reference Signal List
100 Image display device
102 Display panel
104 Imaging unit
106 Sensor
108 Speaker
110 Controller
112 Image receiving unit
114 Image processor
116 Viewing state analyzer
118 Viewing state recorder
120 System optimization processor
122 System controller
132 User direction/distance calculator
134 User characteristic calculator
Claims (8)
[0001]
1. Display apparatus (100), comprising: a display unit (102) adapted to display images; an image forming unit (104) that captures a moving image in a predetermined range with respect to an image display direction; an image analyzer (114) that analyzes the moving image captured by the image forming unit (104) to detect a user's face in the moving image, wherein the display apparatus (100) is adapted to calculate a distance between the image forming unit (104) and a user who is to be guided to an appropriate viewing position; and a display controller (122) adapted to control the display unit (102) to perform a display in order to guide the user to the appropriate viewing position; characterized by the fact that the display apparatus (100) is adapted to: calculate the user's viewing position, including the direction relative to the optical axis of the image forming unit (104) and the distance of the viewing position from a plane through the image forming unit (104) and perpendicular to the optical axis of the image forming unit (104), wherein the distance is calculated by comparing the user's face size detected in the moving image with a reference face size of a reference face at a reference distance; determine whether the user is a child or an adult based on the face detected in the moving image; and determine that the user is to be guided to an appropriate viewing position when the user is a child and when the distance of the calculated viewing position is less than a predetermined minimum distance, the calculated viewing distance then falling within a predetermined approach alert area, wherein, if the user's face leaves the viewing angle of the image forming unit (104), previous calculations of the user's viewing positions are used by assuming or calculating the viewing position from the previous calculations.
[0002]
2. Display device (100) according to claim 1, characterized in that the display controller (122) is adapted to control the display unit (102) to display a message that guides the user to the appropriate viewing position.
[0003]
3. Display device (100) according to claim 1 or 2, characterized in that the display controller (122) is adapted to control the display unit (102) to display a graph illustrating a distance between the user and the display unit (102).
[0004]
4. Display device (100) according to any one of claims 1 to 3, characterized in that the display controller (122) is adapted to control the display unit (102) to display a message that guides the user to the appropriate viewing position while dimming the display unit (102).
[0005]
5. Display apparatus (100) according to any one of claims 1 to 4, characterized in that it further comprises a sound controller that causes a sound emitting unit (108) to emit a tone in order to guide the user to the appropriate viewing position when the detected user position is an inappropriate viewing position and the user is a child.
[0006]
6. Display device (100) according to any one of claims 1 to 5, characterized in that the display device (100) is adapted to determine whether the user is a child or an adult based on the face detected in the moving image and on a history of past determinations.
[0007]
7. Display device (100) according to any one of claims 1 to 6, characterized in that the display device (100) is adapted to calculate the viewing position using a past calculation history when determining, by analyzing the image captured by the image forming unit (104), whether the user is to be guided to the appropriate viewing position.
[0008]
8. Control method, comprising: capturing a moving image in a predetermined range with respect to an image display direction by an image forming unit (104); analyzing the captured moving image to detect a user's face in the moving image; calculating a distance between the image forming unit (104) and a user who is to be guided to an appropriate viewing position; and causing a display unit (102) to perform a display in order to guide the user to the appropriate viewing position; characterized by the fact that the calculating step includes: calculating the user's viewing position, including the direction relative to the optical axis of the image forming unit (104) and the distance of the viewing position from a plane through the image forming unit (104) and perpendicular to the optical axis of the image forming unit (104), wherein the distance is calculated by comparing the user's face size detected in the moving image with a reference face size of a reference face at a reference distance; determining whether the user is a child or an adult based on the face detected in the moving image; and determining that the user is to be guided to an appropriate viewing position when the user is a child and when the distance of the calculated viewing position is less than a predetermined minimum distance, the calculated viewing distance then falling within a predetermined approach alert area, wherein, if the user's face leaves the viewing angle of the image forming unit (104), previous calculations of the user's viewing positions are used by assuming or calculating the viewing position from the previous calculations.
Similar technologies:
Publication number | Publication date | Patent title
BR112012004830B1|2020-10-27|DISPLAY APPLIANCE AND CONTROL METHOD
US10796178B2|2020-10-06|Method and device for face liveness detection
CN104137028B|2019-01-25|Control the device and method of the rotation of displayed image
EP2718871B1|2019-08-07|Enhanced face recognition in video
US8730164B2|2014-05-20|Gesture recognition apparatus and method of gesture recognition
KR102333101B1|2021-12-01|Electronic device for providing property information of external light source for interest object
WO2015172514A1|2015-11-19|Image acquisition device and method
BR112012005231A2|2020-08-04|display device and control method
US9298246B2|2016-03-29|Information processing device, system, and information processing method
JP2013058828A|2013-03-28|Smile determination device and method
BR112012004835A2|2020-07-28|display apparatus and method
KR101316805B1|2013-10-11|Automatic face tracing and recognition method and system
TWI705354B|2020-09-21|Eye tracking apparatus and light source control method thereof
KR20110014450A|2011-02-11|Apparatus and method for improving face recognition ratio
JP6299113B2|2018-03-28|Image display device, image display control method, and image display control program
KR20110006062A|2011-01-20|System and method for face recognition
KR101961266B1|2019-03-25|Gaze Tracking Apparatus and Method
US20120081533A1|2012-04-05|Real-time embedded vision-based eye position detection
JP2020140637A|2020-09-03|Pupil detection device
KR101476503B1|2014-12-26|Interaction providing apparatus and method for wearable display device
JP7007323B2|2022-01-24|Monitoring display device
JP2020153083A|2020-09-24|Automatic door device, automatic door control method, control device, and information presentation device
JP2020181347A|2020-11-05|Monitoring and display device
JP5720758B2|2015-05-20|Display device and control method
KR20210061312A|2021-05-27|Projector meditation system with Buddha-Screen
Family patents:
Publication number | Publication date
JP2011059528A|2011-03-24|
CN102577425A|2012-07-11|
CN104602120B|2019-01-15|
CN104602120A|2015-05-06|
CN102577425B|2015-02-04|
KR101719845B1|2017-03-24|
US9298258B2|2016-03-29|
EP2477183B1|2018-03-28|
US8913007B2|2014-12-16|
US20120218179A1|2012-08-30|
BR112012004830A2|2016-03-15|
RU2549165C2|2015-04-20|
BR112012004830B8|2020-11-24|
EP2477183A1|2012-07-18|
EP2477183A4|2013-09-11|
IN2012DN01889A|2015-07-24|
US20150054738A1|2015-02-26|
RU2012108121A|2013-09-10|
WO2011030624A1|2011-03-17|
JP5418093B2|2014-02-19|
KR20120064070A|2012-06-18|
Cited documents:
Publication number | Filing date | Publication date | Applicant | Patent title

JPH07118008B2|1989-08-18|1995-12-18|松下電器産業株式会社|Image transforming method and image transforming apparatus|
AT192275T|1993-12-03|2000-05-15|Terumo Corp|STEREOSCOPIC IMAGE DISPLAY SYSTEM|
JPH07303195A|1994-05-09|1995-11-14|Kazuo Matsumura|Controller for image display device|
JP3350721B2|1994-10-07|2002-11-25|シャープ株式会社|Clothing amount measuring device and clothing amount measuring method|
GB2306826A|1995-10-18|1997-05-07|Sharp Kk|Display, method of calibrating an observer tracking display and observer tracking autostereoscopic 3D display|
JP3075971B2|1995-12-28|2000-08-14|三洋電機株式会社|Radio receiver|
JP3968477B2|1997-07-07|2007-08-29|ソニー株式会社|Information input device and information input method|
US6076928A|1998-06-15|2000-06-20|Fateh; Sina|Ideal visual ergonomic system for computer users|
JP2000152109A|1998-11-11|2000-05-30|Matsushita Electric Ind Co Ltd|Television receiver|
JP2001016514A|1999-06-30|2001-01-19|Matsushita Electric Ind Co Ltd|Display device with recognizing function|
TW455769B|1999-08-18|2001-09-21|Jian Huei Jiuan|Eye-protection method and apparatus set up for monitor screen|
JP3788394B2|2002-06-13|2006-06-21|ソニー株式会社|Imaging apparatus and imaging method, and display apparatus and display method|
US8123616B2|2003-03-25|2012-02-28|Igt|Methods and apparatus for limiting access to games using biometric data|
JP2005044330A|2003-07-24|2005-02-17|Univ Of California San Diego|Weak hypothesis generation device and method, learning device and method, detection device and method, expression learning device and method, expression recognition device and method, and robot device|
JP3781028B2|2003-10-01|2006-05-31|松下電器産業株式会社|Eye imaging device|
US7176888B2|2004-03-23|2007-02-13|Fujitsu Limited|Selective engagement of motion detection|
RU2370817C2|2004-07-29|2009-10-20|Самсунг Электроникс Ко., Лтд.|System and method for object tracking|
JP3918854B2|2004-09-06|2007-05-23|オムロン株式会社|Substrate inspection method and substrate inspection apparatus|
KR100986660B1|2004-10-20|2010-10-11|후지쓰 텐 가부시키가이샤|Display device|
JP4013943B2|2004-11-22|2007-11-28|船井電機株式会社|Broadcast signal reception system|
WO2006136958A2|2005-01-25|2006-12-28|Dspv, Ltd.|System and method of improving the legibility and applicability of document pictures using form based image enhancement|
KR20070119018A|2005-02-23|2007-12-18|크레이그 써머스|Automatic scene modeling for the 3d camera and 3d video|
JP2007006165A|2005-06-24|2007-01-11|Fujifilm Holdings Corp|Imaging device, imaging method, and imaging program|
JP4165540B2|2005-06-27|2008-10-15|セイコーエプソン株式会社|How to adjust the position of the projected image|
JP4595750B2|2005-08-29|2010-12-08|ソニー株式会社|Image processing apparatus and method, and program|
JP4225307B2|2005-09-13|2009-02-18|船井電機株式会社|Television receiver|
US8218080B2|2005-12-05|2012-07-10|Samsung Electronics Co., Ltd.|Personal settings, parental control, and energy saving control of television with digital video camera|
JP2007158787A|2005-12-06|2007-06-21|Sharp Corp|Display unit and display control method|
JP2007236668A|2006-03-09|2007-09-20|Matsushita Electric Ind Co Ltd|Photographic device, authentication device and photographic method|
US8340365B2|2006-11-20|2012-12-25|Sony Mobile Communications Ab|Using image recognition for controlling display lighting|
JP5045136B2|2007-02-14|2012-10-10|三菱電機株式会社|Large screen display device|
US20080316372A1|2007-06-20|2008-12-25|Ning Xu|Video display enhancement based on viewer characteristics|
CN101409784A|2007-10-10|2009-04-15|联想有限公司|Camera device and information-prompting apparatus|
US9986293B2|2007-11-21|2018-05-29|Qualcomm Incorporated|Device access control|
NO331839B1|2008-05-30|2012-04-16|Cisco Systems Int Sarl|Procedure for displaying an image on a display|
WO2010021373A1|2008-08-22|2010-02-25|ソニー株式会社|Image display device, control method and computer program|
US20100295782A1|2009-05-21|2010-11-25|Yehuda Binder|System and method for control based on face ore hand gesture detection|
CN102804786A|2009-06-16|2012-11-28|Lg电子株式会社|Viewing range notification method and TV receiver for implementing the same|
JP5263092B2|2009-09-07|2013-08-14|ソニー株式会社|Display device and control method|
JP5568929B2|2009-09-15|2014-08-13|ソニー株式会社|Display device and control method|
JP5296218B2|2009-09-28|2013-09-25|株式会社東芝|3D image display method and 3D image display apparatus|WO2012028884A1|2010-09-02|2012-03-08|Elliptic Laboratories As|Motion feedback|
JP5011431B2|2010-12-21|2012-08-29|株式会社東芝|Video signal processing device, processing method, and video display device|
JP5810590B2|2011-04-01|2015-11-11|セイコーエプソン株式会社|Projector and projection method|
JP2015200682A|2012-08-30|2015-11-12|シャープ株式会社|Display device, and control method thereof|
JP6058978B2|2012-11-19|2017-01-11|サターン ライセンシング エルエルシーSaturn Licensing LLC|Image processing apparatus, image processing method, photographing apparatus, and computer program|
KR102121592B1|2013-05-31|2020-06-10|삼성전자주식회사|Method and apparatus for protecting eyesight|
CN104244053A|2013-06-18|2014-12-24|联想有限公司|Output control method and electronic device|
JP2015082753A|2013-10-23|2015-04-27|キヤノン株式会社|Information processing system, imaging device, and control method therefor, and program|
KR20150104711A|2014-03-06|2015-09-16|엘지전자 주식회사|Video display device and operating method thereof|
KR101658629B1|2014-05-02|2016-09-22|삼성전자주식회사|Dispaly apparatus and controlling method thereof|
KR102151206B1|2014-07-07|2020-09-03|삼성디스플레이 주식회사|Mobile terminal and method for controlling the same|
CN106469036B|2015-08-14|2021-02-05|腾讯科技(深圳)有限公司|Information display method and client|
CN105549967A|2015-12-09|2016-05-04|北京用友政务软件有限公司|Myopia prevention method of intelligent electronic device|
JP6973394B2|2016-08-01|2021-11-24|ソニーグループ株式会社|Information processing equipment, information processing methods, and programs|
KR101758040B1|2017-01-03|2017-07-14|셀텍|Apparatus for controlling television viewing|
IT201700044945A1|2017-04-26|2018-10-26|Sebastiano Borrelli|television hight interactive sistem|
JP2021107886A|2019-12-27|2021-07-29|富士フイルムビジネスイノベーション株式会社|Controller and program|
JP2020191665A|2020-08-03|2020-11-26|パラマウントベッド株式会社|Image display controller, image display system, and program|
Legal status:
2019-01-15| B06F| Objections, documents and/or translations needed after an examination request according art. 34 industrial property law|
2019-07-23| B06U| Preliminary requirement: requests with searches performed by other patent offices: suspension of the patent application procedure|
2020-06-02| B09A| Decision: intention to grant|
2020-10-27| B16A| Patent or certificate of addition of invention granted|Free format text: TERM OF VALIDITY: 10 (TEN) YEARS COUNTED FROM 27/10/2020, SUBJECT TO THE LEGAL CONDITIONS.
2020-11-03| B09W| Decision of grant: rectification|Free format text: CORRECTION OF THE TITLE.
2020-11-24| B16C| Correction of notification of the grant|Free format text: REF. RPI 2599 OF 27/10/2020 REGARDING THE TITLE.
Priority:
Application number | Filing date | Patent title
JP2009-210988|2009-09-11|
JP2009210988A|JP5418093B2|2009-09-11|2009-09-11|Display device and control method|
PCT/JP2010/062310|WO2011030624A1|2009-09-11|2010-07-22|Display device and control method|